Approximate Bayesian Inference



Approximate Bayesian Inference for a Mechanistic Model of Vesicle Release at a Ribbon Synapse

Neural Information Processing Systems

The inherent noise of neural systems makes it difficult to construct models that accurately capture experimental measurements of their activity. While much research has been done on how to efficiently model neural activity with descriptive models such as linear-nonlinear (LN) models, Bayesian inference for mechanistic models has received considerably less attention. One reason for this is that such models typically lead to intractable likelihoods, making parameter inference difficult. Here, we develop an approximate Bayesian inference scheme for a fully stochastic, biophysically inspired model of glutamate release at the ribbon synapse, a highly specialized synapse found in different sensory systems. The model translates known structural features of the ribbon synapse into a set of stochastically coupled equations. We approximate the posterior distributions by updating a parametric prior distribution via Bayesian updating rules and show that model parameters can be efficiently estimated for synthetic and experimental data from in vivo two-photon experiments in the zebrafish retina. We also find that the model captures complex properties of synaptic release, such as its temporal precision, and outperforms a standard GLM. Our framework provides a viable path forward for linking mechanistic models of neural activity to measured data.
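
The scheme described is likelihood-free: parameters are judged by how well simulator output matches data, not by an explicit likelihood. As a rough illustration of that general idea (not the authors' parametric prior-updating rule), here is a minimal ABC rejection sketch for a hypothetical binomial vesicle-release simulator; the simulator, summary statistics, and tolerance are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_release(p_release, n_vesicles=8, n_trials=200):
    """Toy stochastic simulator: vesicles released per trial (illustrative only)."""
    return rng.binomial(n_vesicles, p_release, size=n_trials)

def summary(x):
    """Summary statistics: mean and variance of the release counts."""
    return np.array([x.mean(), x.var()])

# "Observed" data generated from a hidden ground-truth parameter.
observed = simulate_release(p_release=0.3)
s_obs = summary(observed)

# ABC rejection: draw from the prior, keep parameters whose simulated
# summaries fall within a tolerance of the observed summaries.
accepted = []
for _ in range(20_000):
    theta = rng.uniform(0.0, 1.0)            # uniform prior over release probability
    s_sim = summary(simulate_release(theta))
    if np.linalg.norm(s_sim - s_obs) < 0.2:
        accepted.append(theta)

print(f"posterior mean ~ {np.mean(accepted):.3f} from {len(accepted)} samples")
```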



Reparameterization invariance in approximate Bayesian inference

Neural Information Processing Systems

Current approximate posteriors in Bayesian neural networks (BNNs) exhibit a crucial limitation: they fail to maintain invariance under reparameterization, i.e., BNNs assign different posterior densities to different parametrizations of identical functions. This is a fundamental flaw in the application of Bayesian principles, as it breaks the correspondence between uncertainty over the parameters and uncertainty over the parametrized function. In this paper, we investigate this issue in the context of the increasingly popular linearized Laplace approximation. Specifically, it has been observed that linearized predictives alleviate the common underfitting problems of the Laplace approximation.
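
The non-invariance is easy to exhibit concretely: because the ReLU is positively homogeneous, scaling a first-layer weight matrix by c > 0 and the second layer by 1/c leaves the network's function unchanged, yet a Gaussian posterior assigns the two parameterizations different densities. A minimal sketch of this effect (illustrative; not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda z: np.maximum(z, 0.0)

W1 = rng.normal(size=(16, 4))
W2 = rng.normal(size=(1, 16))
x = rng.normal(size=(4,))

def f(W1, W2, x):
    return W2 @ relu(W1 @ x)

# Rescaled weights: ReLU is positively homogeneous, so the function is identical.
c = 3.0
same_function = np.allclose(f(W1, W2, x), f(c * W1, W2 / c, x))

def log_density(W1, W2):
    """Log-density of an isotropic Gaussian 'posterior' N(0, I) over all weights."""
    w = np.concatenate([W1.ravel(), W2.ravel()])
    return -0.5 * np.sum(w ** 2)  # up to an additive constant

print(same_function)                                      # True: same function
print(log_density(W1, W2), log_density(c * W1, W2 / c))  # different densities
```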


Entropy-regularized Gradient Estimators for Approximate Bayesian Inference

Kaur, Jasmeet

arXiv.org Machine Learning

Effective uncertainty quantification is important for training modern predictive models with limited data, enhancing both accuracy and robustness. While Bayesian methods are effective for this purpose, they can be challenging to scale. When employing approximate Bayesian inference, ensuring the quality of samples from the posterior distribution in a computationally efficient manner is essential. This paper addresses the estimation of the Bayesian posterior to generate diverse samples by approximating the gradient flow of the Kullback-Leibler (KL) divergence and the cross entropy of the target approximation under the metric induced by the Stein operator. It presents empirical evaluations on classification tasks to assess the method's performance and discusses its effectiveness for model-based reinforcement learning that uses uncertainty-aware network dynamics models.
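
For context, the canonical particle method built on the Stein operator is Stein variational gradient descent (SVGD); the paper's entropy-regularized estimator modifies this gradient-flow view, but a minimal SVGD update on a standard normal target (not the paper's exact estimator) illustrates the underlying machinery:

```python
import numpy as np

rng = np.random.default_rng(2)

def grad_log_p(x):
    """Score of the target: standard normal, so grad log p(x) = -x."""
    return -x

def rbf_kernel(x, h=0.5):
    """RBF kernel matrix and its gradient w.r.t. the first argument."""
    diff = x[:, None] - x[None, :]            # (n, n) pairwise differences
    K = np.exp(-diff ** 2 / (2 * h ** 2))
    dK = -diff / h ** 2 * K                   # d k(x_i, x_j) / d x_i
    return K, dK

particles = rng.uniform(-6, 6, size=100)      # initial particle cloud
for _ in range(500):
    K, dK = rbf_kernel(particles)
    # SVGD direction: kernel-weighted scores plus a repulsive term
    # that keeps the samples diverse.
    phi = (K @ grad_log_p(particles) + dK.sum(axis=0)) / len(particles)
    particles += 0.1 * phi

print(particles.mean(), particles.std())      # approx. 0 and 1
```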


Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

The paper uses an online approximation to MCMC to draw parameters for a Bayesian neural network. The predictive distribution under these samples is then fitted using stochastic approximation. The comparisons are to recent work on approximate Bayesian inference applied to the same models and example problems. The paper does not yet demonstrate that these methods will push forward any particular application. The paper is a fairly natural extension of existing work.
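
The review does not name the sampler, but a standard online MCMC approximation in this line of work is stochastic gradient Langevin dynamics (SGLD). A minimal sketch on a toy scalar posterior, offered as an assumed instance of the idea rather than the paper's method:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy target: posterior over a scalar mean with an N(0, 1) prior
# and a unit-variance Gaussian likelihood.
data = rng.normal(loc=1.5, scale=1.0, size=1000)

def grad_log_post(theta, minibatch):
    """Stochastic gradient of the log-posterior from a minibatch."""
    n, m = len(data), len(minibatch)
    grad_prior = -theta                              # from the N(0, 1) prior
    grad_lik = (n / m) * np.sum(minibatch - theta)   # rescaled minibatch term
    return grad_prior + grad_lik

theta, eps, samples = 0.0, 1e-4, []
for t in range(5000):
    batch = rng.choice(data, size=32, replace=False)
    # SGLD: half a gradient step plus Gaussian noise scaled to the step size.
    theta += 0.5 * eps * grad_log_post(theta, batch) + np.sqrt(eps) * rng.normal()
    if t > 1000:                                     # discard burn-in
        samples.append(theta)

print(np.mean(samples), np.std(samples))  # near the true posterior mean and std
```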


Reviews: Approximate Inference Turns Deep Networks into Gaussian Processes

Neural Information Processing Systems

This paper demonstrates theoretically that multiple forms of approximate Bayesian inference (the Laplace approximation and variational inference) for deep neural networks are equivalent to Gaussian processes. The authors formalize this connection and write out the GP covariance function corresponding to these networks, which surprisingly turns out to be the neural tangent kernel. The authors also establish a connection between the training procedure of the neural network and that of GPs, which is a novel contribution. There is a growing literature on the connection between neural networks and Gaussian processes, with a variety of papers establishing the correspondence in the limit of infinitely many hidden units. This paper adds nicely to that literature, developing a connection to approximate Bayesian inference.
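
The covariance in question is the empirical neural tangent kernel, K(x, x') = J(x) J(x')^T, where J is the Jacobian of the network output with respect to the weights. A minimal numerical sketch using finite differences on a tiny MLP (illustrative only, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(4)

# Tiny MLP with tanh activations; all weights flattened into one vector.
shapes = [(8, 2), (1, 8)]
sizes = [int(np.prod(s)) for s in shapes]
w = rng.normal(size=sum(sizes)) / np.sqrt(8)

def forward(w, x):
    W1 = w[:sizes[0]].reshape(shapes[0])
    W2 = w[sizes[0]:].reshape(shapes[1])
    return (W2 @ np.tanh(W1 @ x))[0]

def jacobian(w, x, h=1e-5):
    """Finite-difference Jacobian of the scalar output w.r.t. the weights."""
    J = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = h
        J[i] = (forward(w + e, x) - forward(w - e, x)) / (2 * h)
    return J

# Empirical NTK: inner products of per-input Jacobians at the current weights.
xs = [rng.normal(size=2) for _ in range(3)]
Js = np.stack([jacobian(w, x) for x in xs])
K = Js @ Js.T
print(K)  # 3x3 kernel matrix
```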


Reviews: Approximate Bayesian Inference for a Mechanistic Model of Vesicle Release at a Ribbon Synapse

Neural Information Processing Systems

[The author responses answered my questions as well as the points raised by other reviewers, providing additional clarification.] This paper formulates a fully probabilistic model of vesicle-release dynamics at the sub-cellular biophysical level in the ribbon synapse. The paper then develops a likelihood-free inference method, tests it on a synthetic dataset, and finally infers the parameters of vesicle release in the ribbon synapse from real data. Originality: The paper presents a novel combination of biophysical modeling of the ribbon synapse and likelihood-free inference of its parameters. To my knowledge, the fully stochastic modeling of the vesicle-release dynamics is itself new.


Reviews: Approximate Bayesian Inference for a Mechanistic Model of Vesicle Release at a Ribbon Synapse

Neural Information Processing Systems

This is an interesting paper on a mechanistic model of the ribbon synapse along with an ABC inference approach. Neither component is particularly novel, but the paper is thorough and compelling. The audience will likely be computationally savvy experimental neuroscientists and those interested in applications of ABC; the former may be harder to find at NeurIPS, though they do exist. I encourage the authors to make the suggested revisions before the camera-ready deadline.

